

These 59 post-holiday Amazon deals drop kitchen and home upgrades for clearance prices

Popular Science

Save big on robot vacuums, air fryers, air purifiers, kitchen appliances, and tons of other devices to improve your home life. We may earn revenue from the products available on this page and participate in affiliate programs. You survived the holidays, and now you're holding the most powerful post-season artifact: an Amazon gift card. Instead of spending it on a random pile of impulse buys, put it toward upgrades that make your home cleaner, cozier, and easier to live in. If you didn't get what you wanted under the tree, now is the time to get it for yourself.


HPM-KD: Hierarchical Progressive Multi-Teacher Framework for Knowledge Distillation and Efficient Model Compression

Haase, Gustavo Coelho, da Silva, Paulo Henrique Dourado

arXiv.org Artificial Intelligence

Knowledge Distillation (KD) has emerged as a promising technique for model compression but faces critical limitations: (1) sensitivity to hyperparameters requiring extensive manual tuning, (2) a capacity gap when distilling from very large teachers to small students, (3) suboptimal coordination in multi-teacher scenarios, and (4) inefficient use of computational resources. We present HPM-KD, a framework that integrates six synergistic components: (i) an Adaptive Configuration Manager via meta-learning that eliminates manual hyperparameter tuning, (ii) a Progressive Distillation Chain with automatically determined intermediate models, (iii) an Attention-Weighted Multi-Teacher Ensemble that learns dynamic per-sample weights, (iv) a Meta-Learned Temperature Scheduler that adapts temperature throughout training, (v) a Parallel Processing Pipeline with intelligent load balancing, and (vi) Shared Optimization Memory for cross-experiment reuse. Experiments on CIFAR-10, CIFAR-100, and tabular datasets demonstrate that HPM-KD achieves 10x-15x compression while maintaining 85% accuracy retention, eliminates the need for manual tuning, and reduces training time by 30-40% via parallelization. Ablation studies confirm the independent contribution of each component (0.10-0.98 pp). HPM-KD is available as part of the open-source DeepBridge library.
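The meta-learned temperature scheduler adapts the softening temperature T during training; the standard temperature-scaled distillation loss it builds on can be sketched as follows (an illustrative NumPy sketch of classic KD, not the authors' implementation; the function names and default T are assumptions):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    z = np.asarray(z, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_soft_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions.
    The T^2 factor keeps gradient magnitudes comparable across temperatures;
    in HPM-KD, T would come from the meta-learned scheduler at each step."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)
```

In practice this soft-target term is mixed with the usual hard-label cross-entropy on the student's logits.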


GENIUS: An Agentic AI Framework for Autonomous Design and Execution of Simulation Protocols

Soleymanibrojeni, Mohammad, Aydin, Roland, Guedes-Sobrinho, Diego, Dias, Alexandre C., Piotrowski, Maurício J., Wenzel, Wolfgang, Rêgo, Celso Ricardo Caldeira

arXiv.org Artificial Intelligence

Computational simulations have revolutionized materials design, accelerating innovation by allowing researchers to explore material properties and their behaviors virtually before experimental validation[1-4]. This shift has led to significant breakthroughs that range from energy storage[5, 6] to pharmaceutical development[7, 8]. However, a persistent challenge undermines this potential: the technical barriers to effective simulation setup disproportionately burden researchers, particularly those whose expertise lies in experimental rather than computational domains. When scientists identify a promising new compound, understanding its fundamental properties often requires computational validation. Yet, even seemingly straightforward simulations frequently lead to lengthy technical challenges. Even experienced computational scientists (physicists, chemists, engineers) find themselves diverted from scientific inquiry toward navigating complex programming challenges, engaging in trial-and-error attempts, and struggling with computational setup details rather than focusing on the scientific questions[9]. Integrated Computational Materials Engineering (ICME) has emerged as a robust framework to accelerate materials development by synergizing experimental data, simulations, and theoretical models across multiple scales.


The fight to see clearly through big tech's echo chambers

The Guardian

Today, I'm mulling over whether to upgrade my iPhone 11 Pro. The encroachment of technology can feel inevitable. It may always have felt that way, but increasingly that perception is bolstered by big tech's own friendly media bubble. Yet even as big tech's echo chambers grow louder, so do critical voices from within.


Exploring Performance Variations in Finetuned Translators of Ultra-Low Resource Languages: Do Linguistic Differences Matter?

Gonçalves, Isabel, Cavalin, Paulo, Pinhanez, Claudio

arXiv.org Artificial Intelligence

Finetuning pre-trained language models with small amounts of data is a commonly-used method to create translators for ultra-low resource languages such as endangered Indigenous languages. However, previous works have reported substantially different performances with translators created using similar methodology and data. In this work we systematically explored possible causes of the performance difference, aiming to determine whether it was a product of different cleaning procedures, limitations of the pre-trained models, the size of the base model, or the size of the training dataset, studying both directions of translation. Our studies, using two Brazilian Indigenous languages that are related but differ in significant structural linguistic characteristics, indicated no or very limited influence from those training factors, suggesting that differences between the languages themselves may play a significant role in the ability to produce translators by fine-tuning pre-trained models.


Quantum Fourier Transform Based Kernel for Solar Irradiance Forecasting

Mechiche-Alami, Nawfel, Rodriguez, Eduardo, Cardemil, Jose M., Droguett, Enrique Lopez

arXiv.org Machine Learning

This study proposes a Quantum Fourier Transform (QFT)-enhanced quantum kernel for short-term time-series forecasting. Exogenous predictors are incorporated by convexly fusing feature-specific kernels. For both quantum and classical models, the only tuned quantities are the feature-mixing weights and the KRR ridge α; classical hyperparameters (γ, r, d) are fixed, with the same validation set size for all models. Experiments are conducted on a noiseless simulator (5 qubits; window length L=32). Limitations and ablations are discussed, and paths toward NISQ execution are outlined. Introduction Quantum Machine Learning (QML) is an emerging discipline that combines the principles of quantum physics with traditional machine learning (ML) to exploit the distinctive characteristics of quantum systems, including superposition and entanglement phenomena [1]. This distinction facilitates the expeditious execution of certain tasks [2], such as classification and dimensionality reduction, where QML has demonstrated significant acceleration [3]. QML applications have extended to time-series data, leveraging quantum phenomena to model complex temporal dependencies. The goal is to enhance the results of traditional tasks by performing computations on qubits, which can process data more efficiently than classical bits [4, 5]. For example, Thakkar et al. [6] demonstrated that quantum machine-learning methods could enhance financial forecasting by improving both churn prediction and credit-risk assessment. Likewise, Kea et al. [7] developed a hybrid quantum-classical Long Short-Term Memory (QLSTM) to improve stock-price forecasting by leveraging quantum data encoding and high-dimensional quantum representations.
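The classical side of the pipeline described here, convexly fusing feature-specific kernels and then fitting kernel ridge regression with ridge term α, can be sketched as follows (a minimal sketch with an RBF stand-in for the quantum kernel; function and class names are hypothetical, not the paper's code):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Classical RBF kernel as a placeholder for the QFT-enhanced quantum kernel."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fused_kernel(kernels, weights):
    """Convex combination of per-feature kernel matrices (weights >= 0, sum to 1),
    mirroring how exogenous predictors are incorporated in the study."""
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and abs(w.sum() - 1.0) < 1e-9
    return sum(wi * Ki for wi, Ki in zip(w, kernels))

class KRR:
    """Kernel ridge regression on a precomputed kernel; alpha is the ridge term."""
    def __init__(self, alpha=1.0):
        self.alpha = alpha
    def fit(self, K, y):
        n = K.shape[0]
        self.coef_ = np.linalg.solve(K + self.alpha * np.eye(n), np.asarray(y, float))
        return self
    def predict(self, K_test):
        return K_test @ self.coef_
```

Under this setup, only the mixing weights and α would be tuned, matching the study's protocol of holding the remaining classical hyperparameters fixed.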


Modeling the Diachronic Evolution of Legal Norms: An LRMoo-Based, Component-Level, Event-Centric Approach to Legal Knowledge Graphs

de Martim, Hudson

arXiv.org Artificial Intelligence

Representing the temporal evolution of legal norms is a critical challenge for automated processing. While foundational frameworks exist, they lack a formal pattern for granular, component-level versioning, hindering the deterministic point-in-time reconstruction of legal texts required by reliable AI applications. This paper proposes a structured, temporal modeling pattern grounded in the LRMoo ontology. Our approach models a norm's evolution as a diachronic chain of versioned F1 Works, distinguishing between language-agnostic Temporal Versions (TV)-each being a distinct Work-and their monolingual Language Versions (LV), modeled as F2 Expressions. The legislative amendment process is formalized through event-centric modeling, allowing changes to be traced precisely. Using the Brazilian Constitution as a case study, we demonstrate that our architecture enables the exact reconstruction of any part of a legal text as it existed on a specific date. This provides a verifiable semantic backbone for legal knowledge graphs, offering a deterministic foundation for trustworthy legal AI.
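The pattern of a diachronic chain of Temporal Versions, each carrying monolingual Language Versions, lends itself to a simple data model. The sketch below illustrates deterministic point-in-time reconstruction under that pattern (a hypothetical minimal mapping of the LRMoo F1/F2 distinction, not the paper's ontology serialization; all class and field names are assumptions):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LanguageVersion:
    """Monolingual LV, corresponding to an LRMoo F2 Expression."""
    lang: str
    text: str

@dataclass
class TemporalVersion:
    """Language-agnostic TV, modeled in the paper as a distinct F1 Work."""
    valid_from: date
    expressions: dict = field(default_factory=dict)  # lang -> LanguageVersion

@dataclass
class Norm:
    """A legal norm as a diachronic chain of versioned Works."""
    versions: list = field(default_factory=list)

    def as_of(self, when: date, lang: str):
        """Deterministic reconstruction: the latest TV in force on `when`."""
        in_force = [tv for tv in self.versions if tv.valid_from <= when]
        if not in_force:
            return None  # the norm did not yet exist on that date
        tv = max(in_force, key=lambda t: t.valid_from)
        return tv.expressions[lang].text
```

In the full model this selection would operate per component (article, paragraph, item), with amendment events linking consecutive versions in the chain.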


Privacy-Preserving Personalization in Education: A Federated Recommender System for Student Performance Prediction

Tertulino, Rodrigo, Almeida, Ricardo

arXiv.org Artificial Intelligence

The increasing digitalization of education presents unprecedented opportunities for data-driven personalization, but it also introduces significant challenges to student data privacy. Conventional recommender systems rely on centralized data, a paradigm often incompatible with modern data protection regulations. A novel privacy-preserving recommender system is proposed and evaluated to address this critical issue using Federated Learning (FL). The approach utilizes a Deep Neural Network (DNN) with rich, engineered features from the large-scale ASSISTments educational dataset. A rigorous comparative analysis of federated aggregation strategies was conducted, identifying FedProx as a significantly more stable and effective method for handling heterogeneous student data than the standard FedAvg baseline. The optimized federated model achieves a high-performance F1-Score of 76.28%, corresponding to 92% of the performance of a powerful, centralized XGBoost model. These findings validate that a federated approach can provide highly effective content recommendations without centralizing sensitive student data. Consequently, our work presents a viable and robust solution to the personalization-privacy dilemma in modern educational platforms.
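The core difference between the FedProx strategy favored here and the FedAvg baseline is a proximal term in each client's local objective, (mu/2)*||w - w_global||^2, which stabilizes training on heterogeneous data by keeping client updates near the server model. A minimal sketch on plain NumPy parameter vectors (illustrative only; function names, mu, and learning rate are assumptions, not the paper's configuration):

```python
import numpy as np

def fedprox_local_update(w_global, grad_fn, mu=0.1, lr=0.01, steps=10):
    """Local SGD with the FedProx proximal term: adding (mu/2)*||w - w_global||^2
    to the local loss contributes mu*(w - w_global) to the gradient, pulling
    each client back toward the current server model."""
    w = w_global.copy()
    for _ in range(steps):
        g = grad_fn(w) + mu * (w - w_global)
        w -= lr * g
    return w

def fedavg(updates, sizes):
    """Server aggregation: data-size-weighted average of client models."""
    sizes = np.asarray(sizes, dtype=float)
    return sum(s * u for s, u in zip(sizes / sizes.sum(), updates))
```

With mu = 0 the local step reduces to the FedAvg client update; larger mu trades local fit for cross-client stability, which is the behavior the paper's comparison exploits.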